Convergence properties of gradient descent noise reduction

Authors

Abstract


Similar Articles

Convergence properties of gradient descent noise reduction

Gradient descent noise reduction is a technique that attempts to recover the true signal, or trajectory, from noisy observations of a non-linear dynamical system for which the dynamics are known. This paper provides the first rigorous proof that the algorithm will recover the original trajectory for a broad class of dynamical systems under certain conditions. The proof is obtained using ideas f...
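As a rough illustration of the technique (not the paper's exact algorithm; the logistic map, step size, and iteration count below are assumptions made for the sketch), one can treat the noisy observations as the initial guess and run gradient descent on the sum of squared one-step dynamical errors:

```python
import numpy as np

R = 3.8  # logistic-map parameter (illustrative assumption)

def f(x):
    # The known dynamics: here, the logistic map.
    return R * x * (1.0 - x)

def df(x):
    # Derivative of the dynamics, needed for the gradient.
    return R * (1.0 - 2.0 * x)

def dynamical_cost(x):
    # Sum of squared one-step prediction errors along the candidate trajectory.
    e = x[1:] - f(x[:-1])
    return float(np.sum(e * e))

def noise_reduce(obs, steps=1500, lr=0.01):
    """Gradient descent on the dynamical cost, initialized at the observations."""
    x = obs.copy()
    for _ in range(steps):
        e = x[1:] - f(x[:-1])
        g = np.zeros_like(x)
        g[1:] += 2.0 * e                 # d(cost)/d x_{t+1}
        g[:-1] -= 2.0 * e * df(x[:-1])   # d(cost)/d x_t
        x -= lr * g
    return x
```

Descent drives the candidate trajectory toward one consistent with the known dynamics; the paper's contribution is a proof of when this limit recovers the original noise-free trajectory.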


Shadowing Pseudo-Orbits and Gradient Descent Noise Reduction

Shadowing trajectories are one of the most powerful ideas of modern dynamical systems theory, providing a tool for proving some central theorems and a means to assess the relevance of models and numerically computed trajectories of chaotic systems. Shadowing has also been seen to have a role in state estimation and forecasting of nonlinear systems. Shadowing trajectories are guaranteed to exist...
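A pseudo-orbit is a sequence that satisfies the dynamics only up to a per-step error δ; shadowing asks whether a true orbit stays uniformly close to it. A minimal checker, assuming the logistic map as the model dynamics (an illustrative choice, not the paper's setting):

```python
import numpy as np

def f(x, r=3.8):
    # Model dynamics: logistic map (illustrative assumption).
    return r * x * (1.0 - x)

def pseudo_orbit_delta(x):
    """Smallest delta for which x is a delta-pseudo-orbit of f,
    i.e. max_t |x_{t+1} - f(x_t)|.  A true orbit gives delta = 0;
    a noisy or numerically computed sequence gives delta > 0."""
    return float(np.max(np.abs(x[1:] - f(x[:-1]))))
```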


Convergence Analysis of Gradient Descent Stochastic Algorithms

This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one, for the ca...
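A toy instance of the increasing-precision idea (the objective, batch schedule, and step sizes are illustrative assumptions, not the paper's setting): average a growing batch of samples to estimate the gradient at each iteration, then step against it with a diminishing step size.

```python
import numpy as np

def sgd_increasing_precision(target_mean=3.0, iters=200, seed=0):
    """Minimize E[0.5*(x - Z)^2], Z ~ N(target_mean, 1), using a
    sample-average gradient whose batch size grows every iteration."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for k in range(1, iters + 1):
        n_k = k                        # batch size grows -> gradient noise shrinks
        z = rng.normal(target_mean, 1.0, size=n_k)
        grad = np.mean(x - z)          # sample-path gradient of 0.5*(x - z)^2
        x -= (1.0 / k) * grad          # diminishing step size
    return x
```

The minimizer of the expected cost is the mean of Z, so the iterates should settle near `target_mean`.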


On the Convergence of Decentralized Gradient Descent

Consider the consensus problem of minimizing f(x) = Σ_{i=1}^{n} f_i(x), where each f_i is known only to a single agent i belonging to a connected network of n agents. All the agents shall collaboratively solve this problem and obtain the solution via data exchanges only between neighboring agents. Such algorithms avoid the need of a fusion center, offer better network load balance, and improve da...
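A minimal sketch of decentralized gradient descent under stated assumptions (quadratic local objectives and a hand-built ring network; not the paper's general setting): each agent averages its neighbors' iterates through a doubly stochastic mixing matrix W and then takes a local gradient step.

```python
import numpy as np

def dgd(a, W, alpha=0.01, iters=2000):
    """Decentralized gradient descent for min_x sum_i 0.5*(x - a_i)^2,
    where agent i only knows a_i and exchanges data with neighbors
    through W, a doubly stochastic matrix supported on the network edges."""
    x = np.zeros(len(a))              # each agent's local copy of the decision variable
    for _ in range(iters):
        grad = x - a                  # local gradients: grad f_i(x_i) = x_i - a_i
        x = W @ x - alpha * grad      # mix with neighbors, then step locally
    return x
```

With a constant step size alpha, the agents reach consensus only up to an O(alpha) error around the true minimizer (here, mean(a)); diminishing step sizes remove this residual at the cost of slower convergence.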


Convergence of Gradient Descent on Separable Data

The implicit bias of gradient descent is not fully understood even in simple linear classification tasks (e.g., logistic regression). Soudry et al. (2018) studied this bias on separable data, where there are multiple solutions that correctly classify the data. It was found that, when optimizing monotonically decreasing loss functions with exponential tails using gradient descent, the linear cla...
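To see the implicit bias concretely, here is a hedged sketch (the toy data, step size, and iteration count are assumptions): full-batch gradient descent on the logistic loss over linearly separable data, whose iterates grow in norm while their direction settles toward a max-margin separator.

```python
import numpy as np

def logistic_gd(X, y, lr=0.1, iters=5000):
    """Full-batch gradient descent on the logistic loss
    L(w) = sum_i log(1 + exp(-y_i * <w, x_i>)), starting from w = 0."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        m = y * (X @ w)                                   # margins y_i <w, x_i>
        s = 1.0 / (1.0 + np.exp(m))                       # sigma(-m_i), per-point weight
        w -= lr * (-(X * (y * s)[:, None]).sum(axis=0))   # gradient step
    return w
```

On this symmetric toy set the max-margin direction is (1, 1)/√2; the loss never reaches zero, so ‖w‖ keeps growing, but the normalized iterate stabilizes on that direction.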



Journal

Journal: Physica D: Nonlinear Phenomena

Year: 2002

ISSN: 0167-2789

DOI: 10.1016/s0167-2789(02)00376-7